Maximisation of stability ranges for recurrent neural networks subject to on-line adaptation
Authors
Abstract
We present conditions for the absolute stability of recurrent neural networks with time-varying weights, based on the Popov theorem from non-linear feedback system theory. We show how to maximise the stability bounds by deriving a convex optimisation problem subject to linear matrix inequality (LMI) constraints, which can be solved efficiently by interior-point methods using standard software.
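The Lyapunov-type analysis underlying such stability conditions can be illustrated with a small numerical sketch. The matrices below (`W`, `D`) are illustrative assumptions, not values from the paper: `W` stands for a recurrent weight matrix and `D` for a bound on the activation-function slopes, giving a linearisation `A = -I + W D` of an additive recurrent network.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical weight matrix and slope bound (illustrative values only).
W = np.array([[0.2, -0.4],
              [0.3,  0.1]])
D = np.eye(2)              # assume activation slopes bounded by 1
A = -np.eye(2) + W @ D     # linearised system matrix

# Solve the Lyapunov equation A^T P + P A = -Q with Q = I.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

# A is Hurwitz (the equilibrium is exponentially stable) iff P is
# symmetric positive definite; check via its eigenvalues.
eigs = np.linalg.eigvalsh((P + P.T) / 2)
stable = bool(np.all(eigs > 0))
print(stable)
```

Maximising the admissible weight range, as the abstract describes, would turn this feasibility check into an optimisation over the LMI variables, solvable with a semidefinite-programming package.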
Similar resources
Robust stability of stochastic fuzzy impulsive recurrent neural networks with time-varying delays
In this paper, the global robust stability of stochastic impulsive recurrent neural networks with time-varying delays, represented by Takagi-Sugeno (T-S) fuzzy models, is considered. A novel linear matrix inequality (LMI)-based stability criterion is obtained using Lyapunov functional theory to guarantee the asymptotic stability of uncertain fuzzy stochastic impulsive recurrent neural...
Input-Output Stability of Recurrent Neural Networks with Time-Varying Parameters
We provide input-output stability conditions for additive recurrent neural networks, regarding them as dynamical operators between their input and output function spaces. The stability analysis is based on methods from non-linear feedback system theory and includes the case of time-varying weights, as introduced, for instance, by on-line adaptation. The results ensure that there are regions in weight...
Intrinsic Stability-Control Method for Recursive Filters and Neural Networks
Linear recursive filters can be adapted on-line, but this raises instability problems. Stability-control techniques exist, but they are either computationally expensive or non-robust. For the nonlinear case, e.g., locally recurrent neural networks, the stability of infinite-impulse-response (IIR) synapses is often a condition to be satisfied. This brief considers the known reparametrization-for-stabilit...
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
Efficient Short-Term Electricity Load Forecasting Using Recurrent Neural Networks
Short-term load forecasting (STLF) plays an important role in the economic and reliable operation of power systems. Electric load demand has a complex profile with many multivariable and nonlinear dependencies. In this study, a recurrent neural network (RNN) architecture is presented for STLF. The proposed model is capable of forecasting the next 24-hour load profile. The main feature in this network is ...
Publication date: 1999